This paper presents a new safety specification method that is robust against errors in the probability distribution of disturbances. Our proposed distributionally robust safe policy maximizes the probability of a system remaining in a desired set for all times, subject to the worst possible disturbance distribution in an ambiguity set. We propose a dynamic game formulation for constructing such policies and identify conditions under which a non-randomized Markov policy is optimal. Based on this existence result, we develop a practical approach to designing safety-oriented stochastic controllers with limited information about disturbance distributions. This control method can be used to minimize another cost function while ensuring safety in a probabilistic manner. However, the associated Bellman equation involves infinite-dimensional minimax optimization problems, since the disturbance distribution may have a continuous density. To resolve these computational issues, we propose a duality-based reformulation that converts each infinite-dimensional minimax problem into a semi-infinite program solvable by existing convergent algorithms. We prove that there is no duality gap, and that this approach therefore preserves optimality. Numerical tests confirm that the proposed method is robust against distributional errors in disturbances, while a standard stochastic safety specification tool is not.
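To make the dynamic-programming structure concrete, the following is an illustrative toy sketch, not the paper's algorithm: it replaces the continuous-density setting with a finite state grid and a finite disturbance support, and uses a total-variation (L1) ambiguity ball around a nominal distribution, for which the inner worst-case expectation has a simple greedy solution. The state space, dynamics, safe set, and ambiguity radius `eps` below are all invented for illustration; the paper itself handles continuous distributions via the duality-based semi-infinite reformulation.

```python
import numpy as np

def worst_case_expectation(values, p0, eps):
    """Minimal expectation of `values` over distributions p on a finite
    support with ||p - p0||_1 <= eps. The greedy optimum moves up to
    eps/2 probability mass from the highest-value outcomes to the lowest."""
    values = np.asarray(values, dtype=float)
    p = np.asarray(p0, dtype=float).copy()
    budget = eps / 2.0
    order = np.argsort(values)          # ascending; order[0] receives mass
    lo = order[0]
    for i in order[::-1]:               # strip mass from high-value outcomes
        if i == lo or budget <= 1e-12:
            break
        move = min(p[i], budget)
        p[i] -= move
        p[lo] += move
        budget -= move
    return float(values @ p)

def dr_safe_value(T, n_states, safe, actions, disturbances, p0, eps):
    """Distributionally robust safety value iteration on a 1-D grid:
    V_t(x) = 1[x safe] * max_u min_{p in ambiguity set} E_p[V_{t+1}(x+u+w)].
    Returns the worst-case probability of staying safe for T steps."""
    V = safe.astype(float)              # terminal value: indicator of safe set
    for _ in range(T):
        V_next = np.zeros(n_states)
        for x in range(n_states):
            if not safe[x]:
                continue                # leaving the safe set is absorbing
            best = 0.0
            for u in actions:
                succ = np.clip(x + u + disturbances, 0, n_states - 1)
                best = max(best, worst_case_expectation(V[succ], p0, eps))
            V_next[x] = best
        V = V_next
    return V
```

Setting `eps = 0` recovers the standard (non-robust) stochastic safety recursion, so comparing the two value functions shows how the robust policy hedges: the robust safety probabilities are never larger than the nominal ones.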